reinforcement learning model for energy storage optimization Search Results


94
MathWorks Inc reinforcement learning models
Fig. 1. Rationale, design, and analytic approach. Individuals learn from experience by selecting an action, observing its outcome, and updating the expected reward value of future actions. Value updates are made using PEs, which reflect the discrepancy between expected and obtained outcomes, such that better-than-expected outcomes lead to positive PEs and worse-than-expected outcomes lead to negative PEs. Other salient social information can also be integrated with experience to influence social decision-making, as when the reputation of one’s partner predicts decisions to trust them even when reputation is unrelated to the partner’s actual behavior (27, 41).
(A) Consider the decision about whether to buy a friend a holiday gift. Reputational information (e.g., news of a friend’s immoral behavior; blue circle) can be integrated with reinforcement history (e.g., did the friend buy you a gift last year? green circle) to affect one’s policy toward their social counterpart. Critically, decisions that correctly anticipate the behavior of one’s social partner (correctly predicting they bought you a gift, or correctly predicting they did not buy you a gift) yield positive PEs according to our policy model, leading to a cycle of reciprocity even when no actual reward is received.
(B) Participants played a modified iterative social trust game with three fictional trustees, in which they had the option to keep an initial endowment or invest it in the hopes of increasing their profit if the trustee also invested. Pretask vignettes were used to manipulate trustees’ reputations (blue box). To manipulate reinforcement history, trustees returned at varying rates across rich, poor, and neutral blocks (green box). On trials where participants kept, counterfactual feedback about what the trustee would have chosen was provided, even though it did not affect the trial payout.
(C) The policy model posits that individuals learn from social feedback to optimize their approach, or policy, toward their social counterpart, leading to better anticipation of the counterpart’s behavior. One implication of this is that correct predictions of the trustee’s behavior will lead to positive reward PEs, even if no actual reward is provided. This can be seen by comparing the expected direction of PEs in a model in which participants track actual rewards (column 3) vs. a model in which they track the success of their policy toward the counterpart (column 4). We propose that policy PEs are primarily encoded within the brain’s default network. Image credit: Default network figure adapted from ref. 35.
(D) MSEM was used to evaluate whether between-person variables (e.g., policy PEs encoded in the default network) moderate the effect of design variables on trial-level decision-making. Personality traits were introduced as between-person predictors of policy PEs in the default network. Formal tests of mediation were then used to examine the indirect effect of traits on behavior via learning signals (highlighted lines).
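The PE-based value update and the reward-vs.-policy contrast described in the caption can be sketched as a standard delta rule. This is an illustrative sketch, not the authors' exact model: the function names (`update_value`, `reward_pe`, `policy_pe`), the learning rate `alpha = 0.3`, and the 0/1 coding of a correct prediction are all assumptions made here for clarity.

```python
# Delta-rule value update with prediction errors (PEs), per the caption.
# alpha and the 0/1 policy coding are illustrative assumptions.

def update_value(value, outcome, alpha=0.3):
    """Update an expected value from one observed outcome.

    PE > 0 when the outcome is better than expected,
    PE < 0 when it is worse than expected.
    """
    pe = outcome - value          # prediction error
    return value + alpha * pe, pe

def reward_pe(value, invested, trustee_shared, payout):
    """Reward model (caption column 3): PE tracks the actual payout."""
    return payout - value

def policy_pe(value, invested, trustee_shared, payout):
    """Policy model (caption column 4): PE tracks whether the participant
    correctly anticipated the trustee's choice, even with no payout."""
    correct = (invested == trustee_shared)
    return (1.0 if correct else 0.0) - value

# Example: a participant keeps, and counterfactual feedback shows the
# trustee also would have kept. There is no payout, yet the policy model
# yields a positive PE while the reward model yields a negative one.
v = 0.5
print(policy_pe(v, invested=False, trustee_shared=False, payout=0.0))  # 0.5
print(reward_pe(v, invested=False, trustee_shared=False, payout=0.0))  # -0.5
```

Under this coding, the divergence in PE sign on kept-but-correct trials is exactly the model comparison the caption's columns 3 and 4 describe.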
Reinforcement Learning Models, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 94/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/reinforcement learning models/product/MathWorks Inc
Average 94 stars, based on 1 article review
reinforcement learning models - by Bioz Stars, 2026-03
94/100 stars

90
SoftMax Inc reinforcement learning model
Reinforcement Learning Model, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/reinforcement learning model/product/SoftMax Inc
Average 90 stars, based on 1 article review
reinforcement learning model - by Bioz Stars, 2026-03
90/100 stars

90
Abbott Laboratories q-learning model
Q Learning Model, supplied by Abbott Laboratories, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/q-learning model/product/Abbott Laboratories
Average 90 stars, based on 1 article review
q-learning model - by Bioz Stars, 2026-03
90/100 stars

90
IEEE Access deep reinforcement learning model
Deep Reinforcement Learning Model, supplied by IEEE Access, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/deep reinforcement learning model/product/IEEE Access
Average 90 stars, based on 1 article review
deep reinforcement learning model - by Bioz Stars, 2026-03
90/100 stars

90
Insilico Medicine gai model gentrl
Gai Model Gentrl, supplied by Insilico Medicine, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/gai model gentrl/product/Insilico Medicine
Average 90 stars, based on 1 article review
gai model gentrl - by Bioz Stars, 2026-03
90/100 stars

90
Uber Technologies Inc go-explore variants
Go Explore Variants, supplied by Uber Technologies Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/go-explore variants/product/Uber Technologies Inc
Average 90 stars, based on 1 article review
go-explore variants - by Bioz Stars, 2026-03
90/100 stars

96
MathWorks Inc neural network toolbox
Neural Network Toolbox, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 96/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/neural network toolbox/product/MathWorks Inc
Average 96 stars, based on 1 article review
neural network toolbox - by Bioz Stars, 2026-03
96/100 stars

90
Siemens AG reinforcement learning models
Reinforcement Learning Models, supplied by Siemens AG, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/reinforcement learning models/product/Siemens AG
Average 90 stars, based on 1 article review
reinforcement learning models - by Bioz Stars, 2026-03
90/100 stars

90
Deepmind Technologies Ltd recurrent network with model-free reinforcement learning
Recurrent Network With Model-Free Reinforcement Learning, supplied by Deepmind Technologies Ltd, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS: scores, article reviews, protocol conditions, and more.
https://www.bioz.com/result/recurrent network with model-free reinforcement learning/product/Deepmind Technologies Ltd
Average 90 stars, based on 1 article review
recurrent network with model-free reinforcement learning - by Bioz Stars, 2026-03
90/100 stars

90
MathWorks Inc neural network toolbox tm
Fig. 1. Rationale, design, and analytic approach. Individuals learn from experience by selecting an action, observing its outcome, and updating the expected reward value of future actions. Value updates are made using PEs, which reflect the discrepancy between expected and obtained outcomes such that better- than-expected outcomes lead to positive PEs and worse-than-expected outcomes lead to negative PEs. Other salient social information can also be integrated with experience to influence social decision-making, as when the reputation of one’s partner predicts decisions to trust them even when reputation is unrelated to the partner’s actual behavior (27, 41). (A) Consider the decision about whether to buy a friend a holiday gift. Reputational information (e.g., news of a friend’s immoral behavior; blue circle) can be integrated with <t>reinforcement</t> history (e.g., did the friend buy you a gift last year? green circle) to affect one’s policy toward their social counterpart. Critically, decisions that correctly anticipate the behavior of one’s social partner (correctly predicting they bought you a gift, or correctly predicting they did not buy you a gift) yield positive PEs according to our policy model, leading to a cycle of reciprocity even when no actual reward is received. (B) Participants played a modified iterative social trust game with three fictional trustees, in which they had the option to keep an initial endowment or invest it in the hopes of increasing their profit if the trustee also invested. Pretask vignettes were used to manipulate trustees’ reputations (blue box). To manipulate reinforcement history, trustees returned at varying rates across rich, poor, and neutral blocks (green box). On trials where participants kept, counterfactual feedback about what the trustee would have chosen was provided, even though it did not affect the trial payout. 
(C) The policy model posits that individuals learn from social feedback to optimize their approach, or policy, toward their social counterpart, leading to better anticipation of the counterpart’s behavior. One implication of this is that correct predictions of the trustee’s behavior will lead to positive reward PEs, even if no actual reward is provided. This can be seen by comparing the expected direction of PEs in a model in which participants track actual rewards (column 3) vs. a model in which they track the success of their policy toward the counterpart (column 4). We propose that policy PEs are primarily encoded within the brain’s default network. Image credit: Default network figure adapted from ref. 35. (D) MSEM was used to evaluate whether between-person variables (e.g., policy PEs encoded in the default network) moderate the effect of design variables on trial-level decision-making. Personality traits were introduced as between-person predictors of policy PEs in the default network. Formal tests of mediation were then used to examine the indirect effect of traits on behavior via learning signals (highlighted lines).
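The delta-rule updating described in the legend can be sketched in a few lines; the contrast between tracking actual rewards (column 3) and tracking policy success (column 4) reduces to what counts as the trial's outcome. This is an illustrative sketch, not the paper's implementation: the function and variable names, the learning rate of 0.3, and the starting value of 0.5 are our own assumptions.

```python
# Delta-rule value updating: V <- V + alpha * PE, where PE = outcome - V.
# Reward model: outcome = money received on the trial.
# Policy model: outcome = 1 if the participant correctly predicted the
# trustee's choice (via the counterfactual feedback), else 0 -- so a
# correct prediction yields a positive PE even when no money changes hands.

def update(value, outcome, alpha=0.3):
    """One delta-rule step; returns (new_value, prediction_error)."""
    pe = outcome - value
    return value + alpha * pe, pe

# Example trial: the participant keeps the endowment, and the counterfactual
# feedback shows the trustee would also have kept -- a correct prediction.
reward_outcome = 0.0   # no money gained on a "keep" trial
policy_outcome = 1.0   # the prediction was correct

v_reward, pe_reward = update(0.5, reward_outcome)
v_policy, pe_policy = update(0.5, policy_outcome)

print(pe_reward)  # negative PE under the reward model
print(pe_policy)  # positive PE under the policy model
```

The same trial thus produces oppositely signed PEs under the two models, which is what lets the fMRI analysis dissociate them.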
Neural Network Toolbox Tm, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation.
https://www.bioz.com/result/neural network toolbox tm/product/MathWorks Inc
neural network toolbox tm - by Bioz Stars, 2026-03

MathWorks Inc matlab r2016a
Matlab R2016a, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation.
https://www.bioz.com/result/matlab r2016a/product/MathWorks Inc
matlab r2016a - by Bioz Stars, 2026-03

Janssen reinforcement learning models of cognition
Reinforcement Learning Models Of Cognition, supplied by Janssen, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation.
https://www.bioz.com/result/reinforcement learning models of cognition/product/Janssen
reinforcement learning models of cognition - by Bioz Stars, 2026-03

Image Search Results


Fig. 1. Rationale, design, and analytic approach. (Legend reproduced in full above.)

Journal: Proceedings of the National Academy of Sciences of the United States of America

Article Title: Callousness, exploitativeness, and tracking of cooperation incentives in the human default network.

doi: 10.1073/pnas.2307221121

Figure Legend Snippet: Fig. 1. Rationale, design, and analytic approach. (Legend reproduced in full above.)

Article Snippet: Reinforcement learning models were fit using the variational Bayesian analysis toolbox in MATLAB, which leverages parameter estimates from the full sample to constrain individual parameter estimates, reducing the risk of misestimation in poorly performing participants (64).
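The article snippet describes hierarchical fitting with the variational Bayesian analysis toolbox in MATLAB, where group-level estimates constrain each participant's parameters. As a rough, non-authoritative illustration of what fitting such a reinforcement learning model involves, here is a per-subject maximum-likelihood sketch in Python (a Rescorla-Wagner learner with softmax choice, fit by grid search; the trial data, parameter grid, and all names are invented and far simpler than the paper's variational, sample-constrained approach):

```python
import math

def neg_log_likelihood(alpha, beta, trials):
    """Negative log-likelihood of a Rescorla-Wagner + softmax model.
    trials: list of (choice, outcome) pairs with choice in {0, 1}."""
    v = [0.5, 0.5]  # initial values for the two options
    nll = 0.0
    for choice, outcome in trials:
        # Softmax probability of choosing option 1 given current values.
        p1 = 1.0 / (1.0 + math.exp(-beta * (v[1] - v[0])))
        p_choice = p1 if choice == 1 else 1.0 - p1
        nll -= math.log(max(p_choice, 1e-12))
        # Delta-rule update of the chosen option only.
        v[choice] += alpha * (outcome - v[choice])
    return nll

# Invented example data: (choice, outcome) per trial.
trials = [(1, 1), (1, 1), (0, 0), (1, 1), (1, 0), (1, 1)]

# Crude grid search in place of the toolbox's variational optimization.
best = min(
    ((a / 20, b, neg_log_likelihood(a / 20, b, trials))
     for a in range(1, 20) for b in (1, 2, 4, 8)),
    key=lambda t: t[2],
)
print(best)  # (alpha, beta, nll) minimizing the negative log-likelihood
```

The empirical-Bayes step the paper relies on would additionally shrink each subject's (alpha, beta) toward the group distribution, which is what protects poorly performing participants from extreme estimates.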

Techniques: Modification